Homeostasis-Inspired Continual Learning: Learning to Control Structural Regularization
Authors
Abstract
Learning continually without forgetting might be one of the ultimate goals for building artificial intelligence (AI). However, unless enough resources are equipped, forgetting knowledge acquired in the past is inevitable. Then, we can naturally pose a fundamental question about how to control what, and how much, a model forgets in order to improve overall accuracy. To give a clear answer to it, we propose a novel trainable network termed the homeostatic meta-model. The proposed neuromorphic framework is a natural extension of the conventional concept of Synaptic Plasticity (SP), further optimizing the accuracy of continual learning. Preceding works on SP and its variations seek the parameters that are important for structural regularization, but care less about the intensity of regularization (IoR). Per contra, this work reveals that a careful selection of the IoR during training remarkably improves accuracy across tasks. Our method balances accuracy between newly learned tasks and previously acquired ones, rather than biasing toward a specific task or evenly balancing across all tasks. To obtain effective and optimal IoRs under real-time continual learning circumstances, we design a homeostasis-inspired meta architecture. It automatically controls the IoR by capturing information from previous tasks and the current learning direction. We provide experimental results considering various types of continual learning scenarios, showing that the proposed method notably outperforms conventional methods in terms of forgetting. We also show that our method is relatively stable and robust compared with existing SP-based methods. Furthermore, the IoR generated by our model interestingly appears to be proactively controlled within a certain range, which resembles the negative feedback mechanism of homeostasis in synapses.
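To make the idea concrete, the following is a minimal PyTorch sketch of SP-style structural regularization with an adaptive intensity term. The EWC-style quadratic penalty, the IoRController module, and all names here are illustrative assumptions for exposition, not the authors' actual homeostatic meta-model.

```python
# Minimal sketch: task loss + IoR-weighted structural penalty, where the IoR
# is produced by a small meta-module instead of being a fixed hyperparameter.
# The quadratic (EWC-style) penalty and the controller design are assumptions.
import torch
import torch.nn as nn

def structural_penalty(model, anchors, importance):
    """Quadratic penalty pulling parameters toward values learned on past
    tasks, weighted by per-parameter importance (e.g., a Fisher estimate)."""
    total = 0.0
    for name, p in model.named_parameters():
        total = total + (importance[name] * (p - anchors[name]) ** 2).sum()
    return total

class IoRController(nn.Module):
    """Hypothetical meta-module: maps coarse training statistics to a
    positive regularization intensity, mimicking a negative-feedback loop."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, task_loss, penalty):
        stats = torch.stack([task_loss.detach(), penalty.detach()])
        return self.net(stats)

def continual_step(model, controller, batch, anchors, importance,
                   criterion, optimizer):
    x, y = batch
    task_loss = criterion(model(x), y)
    penalty = structural_penalty(model, anchors, importance)
    # Adaptive IoR; how the controller itself is trained (the meta-objective)
    # is the crux of the paper and is omitted here.
    ior = controller(task_loss, penalty).detach()
    loss = task_loss + ior * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), ior.item()
```

A fixed scalar lambda in place of `ior` recovers standard EWC-style regularization; the point of the sketch is only where the adaptive intensity enters the objective.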
Similar Resources
Learning Fair Classifiers: A Regularization-Inspired Approach
We present a regularization-inspired approach for reducing bias in learned classifiers. In particular, we focus on binary classification tasks over individuals from two populations, where, as our criterion for fairness, we wish to achieve similar false positive rates in both populations, and similar false negative rates in both populations. As a proof of concept, we implement our approach and e...
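As a rough illustration of such a regularizer (our own schematic formulation, not necessarily the paper's exact objective), one can penalize the gaps in error rates between the two populations:

\[
\min_{\theta}\; \mathcal{L}_{\mathrm{cls}}(\theta) + \lambda \left( \left| \mathrm{FPR}_1(\theta) - \mathrm{FPR}_2(\theta) \right| + \left| \mathrm{FNR}_1(\theta) - \mathrm{FNR}_2(\theta) \right| \right),
\]

where \(\mathrm{FPR}_i\) and \(\mathrm{FNR}_i\) are the false positive and false negative rates on population \(i\), and \(\lambda\) trades classification accuracy off against fairness. In practice these rates are not differentiable, so a smooth surrogate would stand in for them during training.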
Variational Continual Learning
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and enti...
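For context, the VCL update (as stated in that line of work) recursively projects the exact Bayesian posterior back into a tractable family \(\mathcal{Q}\) after each task:

\[
q_t(\theta) = \operatorname*{arg\,min}_{q \in \mathcal{Q}} \; \mathrm{KL}\!\left( q(\theta) \,\middle\|\, \tfrac{1}{Z_t}\, q_{t-1}(\theta)\, p(\mathcal{D}_t \mid \theta) \right),
\]

where \(q_{t-1}\) is the approximate posterior after task \(t-1\), \(\mathcal{D}_t\) is the data for task \(t\), and \(Z_t\) is the normalizer; the previous posterior acts as the prior for the new task.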
Learning to Forget: Continual Prediction with LSTM
Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the ...
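The remedy proposed in that work is the now-standard forget gate, summarized here in modern notation rather than quoted from the paper:

\[
f_t = \sigma\!\left(W_f x_t + U_f h_{t-1} + b_f\right), \qquad
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,
\]

so the cell state \(c_t\) can be decayed or effectively reset (\(f_t \approx 0\)) by the network itself, without an externally provided segmentation signal.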
Continual Robot Learning with Constructive Neural Networks
In this paper, we present an approach for combining reinforcement learning, learning by imitation, and incremental hierarchical development. We apply this approach to a realistic simulated mobile robot that learns to perform a navigation task by imitating the movements of a teacher and then continues to learn by receiving reinforcement. The behaviours of the robot are represented as sensation-a...
Continual Learning for Mobile Robots
Autonomous mobile robots should be able to learn incrementally and adapt to changes in the operating environment during their entire lifetime. This is referred to as continual learning. In this thesis, I propose an approach to continual learning which is based on adaptive state-space quantisation and reinforcement learning. Representational tools for continual learning should be constructive, a...
Journal
Journal title: IEEE Access
Year: 2021
ISSN: 2169-3536
DOI: https://doi.org/10.1109/access.2021.3050176